8 research outputs found

    Harnessing Checker Hierarchy for Reliable Microprocessors

    Get PDF
    Traditional fault-tolerant multi-threading architectures provide good fault tolerance by re-executing all computations. However, such full re-execution significantly increases the demand on processor resources, resulting in severe performance degradation. To address this problem, this dissertation presents Active Verification Management (AVM) approaches that use a checker hierarchy to increase performance with minimal effect on overall reliability. Based on a simplified queueing model, AVM employs a filter checker that prioritizes verification candidates so that only selected instructions are verified. This dissertation proposes three filter checkers, based on (1) result usage, (2) result bitwidth, and (3) result anomaly, that exploit correctness-criticality metrics and anomaly speculation. Binary Correctness Criticality (BCC) and Likelihood of Correctness Criticality (LoCC) are metrics that quantify, respectively, whether an instruction is important for reliability and how likely an instruction is to be correctness-critical. Based on the BCC, the result-usage-based filter checker reduces the verification workload by bypassing instructions that are unnecessary for correct execution. The LoCC is computed by exploiting information redundancy, compressing the computationally useful data bits of a result. These numerical-significance hints let the result-bitwidth-based filter checker assign verification priorities effectively before re-execution starts. The result-anomaly-based filter checker exploits a value-similarity property, defined as the frequent occurrence of partially identical values. Based on the biased distribution of the similarity-distance measure, this dissertation further investigates exploiting similar values for soft-error tolerance with anomaly speculation. Extensive measurements show that the majority of instructions produce values that differ from the previous result in only a few bits. Experimental results show that the proposed schemes make the processor 180% faster than a traditional fully fault-tolerant processor, with minimal impact on the overall soft error rate. Without AVM, congestion at the checker degrades performance by 57% compared to a non-fault-tolerant processor. These results indicate that the proposed AVM can solve the verification congestion problem when perfect fault coverage is not needed.
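
    The result-anomaly filter checker lends itself to a compact illustration: if an instruction's new result differs from its previous result in only a few bits, it is treated as likely correct and skips full re-execution, while a large similarity distance flags an anomaly that is prioritized for verification. The sketch below is a minimal illustration of that idea only; the threshold, the per-instruction history keyed by PC, and the names (SimilarityFilter, should_verify) are assumptions, not the dissertation's microarchitectural implementation.

        # Illustrative sketch of a result-anomaly filter checker based on value similarity.
        # Assumption: each static instruction keeps its previous result; a result whose
        # Hamming distance to that previous value exceeds a threshold is treated as an
        # anomaly and sent to the checker, otherwise verification is skipped.

        class SimilarityFilter:
            def __init__(self, bit_threshold=4, width=64):
                self.bit_threshold = bit_threshold   # max differing bits still considered "similar"
                self.width = width                   # result bit width
                self.history = {}                    # last result seen per static instruction (by PC)

            def should_verify(self, pc, result):
                prev = self.history.get(pc)
                self.history[pc] = result
                if prev is None:
                    return True                      # no history yet: verify conservatively
                diff_bits = bin((prev ^ result) & ((1 << self.width) - 1)).count("1")
                return diff_bits > self.bit_threshold  # large similarity distance -> anomaly

        # Example: a value close to its predecessor bypasses the checker,
        # a wildly different value is prioritized for verification.
        f = SimilarityFilter()
        f.should_verify(0x400123, 0x00000000DEADBEEF)   # True  (no history yet)
        f.should_verify(0x400123, 0x00000000DEADBEEE)   # False (1 bit differs)
        f.should_verify(0x400123, 0xFFFFFFFF00000000)   # True  (many bits differ)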

    Experimental Research Testbeds for Large-Scale WSNs: A Survey from the Architectural Perspective

    No full text
    Wireless sensor networks (WSNs) have significant potential in diverse applications. In contrast to small-scale WSNs, real-world adoption of large-scale WSNs has been quite slow, particularly due to the lack of robustness of protocols at all levels. To meet the pressing need for experimental verification and evaluation of such protocols, researchers have developed numerous WSN testbeds. While each individual WSN testbed contributes to progress with its own innovations, what is still missing is an analysis of the overall system architectures and methodologies that could lead to systematic advances. This paper provides a framework for reasoning about evolving WSN testbeds from the architectural perspective. We define three core requirements for WSN testbeds: scalability, flexibility, and efficiency. We then establish a taxonomy of WSN testbeds that represents the architectural design space as a hierarchy of design domains and associated design approaches. Through a comprehensive literature survey of prominent existing WSN testbeds, we examine best practices for each design approach in our taxonomy. Finally, we qualitatively evaluate WSN testbeds with respect to the core requirements by assessing the influence of each design approach on those requirements, and we suggest future directions of research.
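
    The taxonomy and the qualitative evaluation described above can be pictured as a simple data structure that maps design approaches, grouped by design domain, to their influence on the three core requirements. The domains, approaches, and +/0/- scores below are illustrative placeholders, not the survey's actual taxonomy or assessments.

        # Illustrative sketch of a testbed-taxonomy evaluation matrix.
        # Design domains, approaches, and influence scores are placeholders.

        CORE_REQUIREMENTS = ("scalability", "flexibility", "efficiency")

        # design domain -> design approach -> influence on each core requirement
        taxonomy = {
            "node architecture": {
                "single-tier motes":   {"scalability": "+", "flexibility": "-", "efficiency": "0"},
                "multi-tier gateways": {"scalability": "+", "flexibility": "+", "efficiency": "-"},
            },
            "experiment control": {
                "centralized server":  {"scalability": "-", "flexibility": "0", "efficiency": "+"},
                "federated testbeds":  {"scalability": "+", "flexibility": "+", "efficiency": "-"},
            },
        }

        def summarize(taxonomy):
            """Print how each design approach influences the core requirements."""
            for domain, approaches in taxonomy.items():
                for approach, scores in approaches.items():
                    row = ", ".join(f"{req}: {scores[req]}" for req in CORE_REQUIREMENTS)
                    print(f"[{domain}] {approach:22s} {row}")

        summarize(taxonomy)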

    Cache-assisted Mobile Edge Computing over Space-Air-Ground Integrated Networks for Extended Reality Applications

    Full text link
    Extended reality-enabled Internet of Things (XRI) provides new user experiences and a sense of immersion by adding virtual elements to the real world through Internet of Things (IoT) devices and emerging 6G technologies. However, computation-intensive XRI tasks are difficult for energy-constrained, small-size XRI devices to handle, and certain data requires centralized computation and must be shared among users. To this end, we propose a cache-assisted space-air-ground integrated network mobile edge computing (SAGIN-MEC) system for XRI applications, consisting of two types of cache-equipped edge servers, one mounted on an unmanned aerial vehicle (UAV) and one on a low Earth orbit (LEO) satellite, together with multiple ground XRI devices. For system efficiency, four different offloading procedures for the XRI data are considered, depending on the type of information, i.e., shared data or private data, as well as the offloading decision and the caching status. Specifically, private data can be offloaded to either the UAV or the LEO satellite, while the offloading decision for shared data to the LEO satellite is determined by the caching status. With the aim of maximizing the energy efficiency of the overall system, we jointly optimize the UAV trajectory, resource allocation, and offloading decisions under latency constraints and the UAV's operational limitations, using an alternating optimization (AO)-based method together with the Dinkelbach algorithm and successive convex approximation (SCA). Numerical results verify that the proposed algorithm outperforms conventional partial optimizations and the case without caching.
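
    The energy-efficiency objective above is a ratio (useful work over consumed energy), which is why the Dinkelbach algorithm appears in the solution method: it converts the fractional objective into a sequence of parametric subproblems. The sketch below shows only the generic Dinkelbach iteration on a toy scalar problem; the actual SAGIN-MEC subproblem (joint trajectory, resource allocation, and offloading, solved via AO and SCA) is far more involved and is not reproduced here, and the rate/power functions used are assumptions for illustration.

        import math

        # Generic Dinkelbach iteration for maximizing a ratio N(x)/D(x) (with D > 0)
        # over a feasible set.  Toy stand-in for an energy-efficiency objective:
        # achievable rate log2(1 + g*p) divided by total power (p_circuit + p).

        def dinkelbach(N, D, candidates, tol=1e-6, max_iter=100):
            lam = 0.0
            for _ in range(max_iter):
                # Parametric subproblem: maximize N(x) - lam * D(x) over the candidates.
                x_star = max(candidates, key=lambda x: N(x) - lam * D(x))
                gap = N(x_star) - lam * D(x_star)
                lam = N(x_star) / D(x_star)        # updated energy-efficiency estimate
                if abs(gap) < tol:
                    break
            return x_star, lam

        # Toy instance: transmit power p in [0, 1] W, channel gain g, circuit power 0.1 W.
        g, p_circuit = 8.0, 0.1
        rate  = lambda p: math.log2(1.0 + g * p)       # "useful work" N(p)
        power = lambda p: p_circuit + p                # "consumed energy" D(p)
        grid  = [i / 1000.0 for i in range(1001)]      # crude feasible-set discretization

        p_opt, ee = dinkelbach(rate, power, grid)
        print(f"optimal power ~ {p_opt:.3f} W, energy efficiency ~ {ee:.3f} bits/Hz/J")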

    Multi-Channel Packet-Analysis System Based on IEEE 802.15.4 Packet-Capturing Modules

    No full text
    There have been increasing demands for research into multi-channel wireless sensor network protocols and applications that support requirements such as increased throughput and real-time or reliable transmission. Researchers and developers of these protocols and applications have to analyze the exchanged packets simultaneously, checking the correctness of both their contents and their message-exchange timelines. However, if developers use multiple conventional single-channel packet sniffers for this purpose, debugging during development and verification becomes extremely tedious and difficult, because protocol correctness must be checked over multiple channels individually. Therefore, we present a multi-channel packet-analysis system (MPAS) that helps in debugging and verifying multi-channel protocols and applications. Wireless packets are captured and timestamped by a sniffer module in the MPAS for each channel, then preprocessed and transmitted to a GUI-based analyzer, which parses the received packets and displays them in order. We present the design and implementation of the MPAS and evaluate its performance against a widely used packet sniffer.
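
    The analyzer's core task described above, ordering packets captured independently on different channels into a single timeline, can be sketched as a simple timestamp-based merge of per-channel streams. The record fields, the synchronized-clock assumption, and the merge step below are illustrative assumptions, not the MPAS implementation.

        import heapq
        from collections import namedtuple

        # Illustrative sketch: merge per-channel capture streams into one timeline.
        # Assumes each sniffer module already emits packets in timestamp order and
        # that module clocks are synchronized; the real MPAS preprocessing is not shown.

        Packet = namedtuple("Packet", ["timestamp_us", "channel", "payload"])

        def merge_channels(*channel_streams):
            """Yield packets from all channels in global timestamp order."""
            return heapq.merge(*channel_streams, key=lambda pkt: pkt.timestamp_us)

        # Example: three IEEE 802.15.4 channels captured by separate sniffer modules.
        ch11 = [Packet(100, 11, b"\x61\x88"), Packet(450, 11, b"\x02\x00")]
        ch15 = [Packet(120, 15, b"\x61\x88")]
        ch20 = [Packet(90, 20, b"\x41\xc8")]

        for pkt in merge_channels(ch11, ch15, ch20):
            print(f"t={pkt.timestamp_us:>4} us  ch={pkt.channel:>2}  len={len(pkt.payload)}")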

    HIOS

    No full text